Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada, specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.

Stephen Downes, stephen@downes.ca, Casselman, Canada

Virtual reality stories can spur environmental action

According to this article, "Compared to traditional video, environmental stories told through metaverse technologies, including virtual reality and 360-degree video, can better motivate people to act on environmental threats." It's based on a research report (8 page PDF) by Daniel Pimentel and Sriram Kalyanaraman. All very well, but it should go without saying that if VR can motivate environmental activism, it can also use misinformation to do the opposite.

Today: 96 Total: 96 Molly Blanchett-U. Oregon, Futurity, 2024/05/20 [Direct Link]
Students Pitted Against ChatGPT to Improve Writing

To quote Matthew Tower, "A new type of homework assignment: write an essay that is better than ChatGPT's answer to the same prompt." According to the article, "Students in two courses at the University of Nevada, Reno, are going head-to-head with ChatGPT by answering the same prompts as the AI and aiming to get a higher grade." Tower finds this "super compelling" because "You have to 1) understand the assignment, 2) are effectively deterred from using ChatGPT because the ChatGPT answer is a given, and 3) have to think about what makes your response 'better' than the stock robot answer." I have to admit, it's creative, even though the assignment might boil down to finding a better prompt than the one the instructor used.

Today: 100 Total: 100 Lauren Coffey, Inside Higher Ed, 2024/05/20 [Direct Link]
National Labor Relations Board hearing begins for proposed Berea College labor union

As reported by the Communications Workers of America (CWA), "Berea College is a work college and requires all students to work on campus." Working conditions, however, are terrible. "The schedule left me struggling, sleeping through my morning classes, and failing a class that I had to drop," said one student. "If you get stuck with a certain position and have no voice on the job, there's a correlation and a causal relationship with drop-outs." After several days, the Labor Relations Board hearing has finally started. According to the Chronicle (paywalled) the College calls it an "Existential Threat." If your existence depends on exploiting student labour, then maybe it should be questioned. More: Fox56, which points out the students don't pay tuition; Lex18, which reports "Our message is it is pro-Berea to be pro-union," said Andi Mellon, a Berea student. Berea hasn't charged tuition since 1892 and has an endowment "worth around $1.2 billion, and profits from the investments cover a large portion of what it costs to educate more than 1,600 students."

Today: 97 Total: 97 Shepherd Snyder, WEKU, 2024/05/20 [Direct Link]
Fake science journals

As Victor Mair reports, "the fake science sickness has infected some of our mainstream publishing houses." It's easy to blame AI for this, but of course it's not just AI. It's the people abusing AI to try to tap into the nearly $30 billion academic publishing industry. And they've always been around. Mair argues that the law should get involved. "Accredited authorities should go on the offensive and work for the enactment of laws and penalties," he writes. "Make these crimes of sham scholarship cost." More effective, I think, would be to take the money out of the system. Colleges and universities could publish in-house and make the papers open access. Then there's no way to make money out of the system, and the people who pay for the existing system - authors and institutions, who would now be publishing in-house - end up paying a lot less.

Today: 102 Total: 102 Victor Mair, Language Log, 2024/05/20 [Direct Link]
When Online Content Disappears

Remember when people told you "what's online is forever"? Well, according to this Pew report, "A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible." I have no reason to doubt that, based on my own experience. "23% of news webpages contain at least one broken link, as do 21% of webpages from government sites.... 54% of Wikipedia pages contain at least one link in their 'References' section that points to a page that no longer exists.... Nearly one-in-five tweets are no longer publicly visible on the site just months after being posted." Via Dogtrax.

Today: 84 Total: 304 Athena Chapekis, Samuel Bestvater, Emma Remy and Gonzalo Rivero, Pew Research Center, 2024/05/20 [Direct Link]
Toward a Definition of Open Source AI

Just a quick update from David Wiley on the Open Source Initiative (OSI) attempt to own, er, I mean, steward, the definition of 'open source AI'. Wiley notes "The definition is currently in its eighth draft, with the goal of finalizing the definition by October, 2024." Why am I so sceptical of the OSI initiative? Well, the first sentence contains the phrase "massive benefits accrue" and everything else is based on this, as though there would be no point to open AI if it didn't make money. Yet (ironically) there's no requirement in the four freedoms (use, study, modify, share) that these benefits be distributed equitably, or that they not deprive one of any existing freedom, or cause no harm in their application. Moreover, 'open source AI' is "made available", and not (say) developed openly, or developed by a community; it's as though it's some sort of creation that comes down from on high. Defining 'open' economically, and then cleaving it from social justice and any sort of community-based process, is the sort of 'open' we might find in Silicon Valley, but not really one we want in our own communities.

Today: 17 Total: 368 David Wiley, improving learning, 2024/05/17 [Direct Link]


Copyright 2024
Last Updated: May 20, 2024 11:37 a.m.

Creative Commons License.